Data representation refers to the way that values are stored in a computer. Computers do not use the familiar base-10 (decimal) number system; their hardware works in base 2 (binary), so every value is ultimately stored as a sequence of 1s and 0s (bits).
When storing an integer, there are two representations, unsigned and signed, depending on whether the value is always non-negative or may also be negative. The number of bits used to store the value determines the range it can hold, as shown in the table below.
Size | Number of Values | Unsigned Range | Signed Range |
---|---|---|---|
Byte (8 bits) | 2^8 | 0 to 255 | -128 to 127 |
Word (16 bits) | 2^16 | 0 to 65,535 | -32,768 to 32,767 |
Doubleword (32 bits) | 2^32 | 0 to 4,294,967,295 | -2,147,483,648 to 2,147,483,647 |
Quadword (64 bits) | 2^64 | 0 to 2^64 - 1 | -2^63 to 2^63 - 1 |
Double Quadword (128 bits) | 2^128 | 0 to 2^128 - 1 | -2^127 to 2^127 - 1 |
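As a quick sanity check of the table, the following minimal C sketch (not part of the original text) prints these ranges using the fixed-width types from `<stdint.h>`. Standard C has no portable 128-bit integer type, so the Double Quadword row is omitted.

```c
/* Print the unsigned and signed ranges of the fixed-width integer types. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Byte (8 bits) */
    printf("uint8_t : 0 to %" PRIu8 "\n", UINT8_MAX);
    printf("int8_t  : %" PRId8 " to %" PRId8 "\n", INT8_MIN, INT8_MAX);

    /* Word (16 bits) */
    printf("uint16_t: 0 to %" PRIu16 "\n", UINT16_MAX);
    printf("int16_t : %" PRId16 " to %" PRId16 "\n", INT16_MIN, INT16_MAX);

    /* Doubleword (32 bits) */
    printf("uint32_t: 0 to %" PRIu32 "\n", UINT32_MAX);
    printf("int32_t : %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);

    /* Quadword (64 bits) */
    printf("uint64_t: 0 to %" PRIu64 "\n", UINT64_MAX);
    printf("int64_t : %" PRId64 " to %" PRId64 "\n", INT64_MIN, INT64_MAX);

    return 0;
}
```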
Unsigned integers are represented in their typical binary form.
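For example, the sketch below (the helper name `print_bits` is just for illustration) prints the plain binary form of an unsigned 8-bit value, most significant bit first.

```c
/* Print the binary form of an unsigned 8-bit value, MSB first. */
#include <stdio.h>
#include <stdint.h>

static void print_bits(uint8_t value) {
    for (int bit = 7; bit >= 0; bit--) {
        putchar(((value >> bit) & 1) ? '1' : '0');
    }
    putchar('\n');
}

int main(void) {
    print_bits(13);   /* prints 00001101, i.e. 8 + 4 + 1 = 13 */
    print_bits(255);  /* prints 11111111, the largest unsigned byte */
    return 0;
}
```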
Signed integers are represented using two's complement. To obtain the negative of a number in two's complement, invert all of its bits and add 1. A corollary of this representation is that addition and subtraction need no special handling for signed values; the same binary arithmetic works for both signed and unsigned operands.
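The following sketch, assuming an ordinary two's complement platform, shows that inverting the bits of a value and adding 1 yields its negative within a fixed 8-bit width, and that plain binary addition then produces the correct signed result.

```c
/* Two's complement negation and addition within an 8-bit width. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t x = 5;                       /* 00000101 */
    uint8_t neg_x = (uint8_t)(~x + 1);   /* 11111011, i.e. -5 in two's complement */

    printf("~x + 1 = 0x%02X\n", neg_x);        /* 0xFB */
    printf("as signed: %d\n", (int8_t)neg_x);  /* -5 */

    /* The same adder handles signed values: 7 + (-5) = 2, with the carry
     * out of the top bit simply discarded by the 8-bit truncation. */
    uint8_t sum = (uint8_t)(7 + neg_x);
    printf("7 + (-5) = %d\n", (int8_t)sum);    /* 2 */
    return 0;
}
```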